\title{Metacritique of McDermott's Critique of Pure Reason}
\author{*****************Still a Draft****************\\
\\
\\
Carl Hewitt\\
MIT Artificial Intelligence Laboratory \\
545 Technology Square \\
Cambridge, Massachusetts 02139\\}
\date{December 26, 1986}
\bigskip
\maketitle
\vspace{.5in}
\begin{abstract}
Drew McDermott has produced an interesting and valuable critique of
the Logicist Approach. However, I believe that his critique is
somewhat superficial in its analysis. This response applies and
extends some of the published critical work [Hewitt 1969] [Minsky
1975] [Hewitt and de Jong 1981, 1983] [Hewitt 1985, 1986] on the
Logicist Approach to supplement McDermott's critique.
\end{abstract}
\section{The Logicist Argument} I part company with step one of
McDermott's version of ``The Logicist Argument,'' which states ``It
starts from a premise that almost everyone in Artificial Intelligence
would accept, that programs must be based on a lot of knowledge.''
McDermott's step one is reminiscent of the following slogan which is
currently popular:
\begin{quote}
In the Knowledge lies the Power.
\end{quote}
I would like to turn the above slogan on its side and maintain
instead:
\begin{quote}
In the {\em Organization} lies the Power.
\end{quote}
Knowledge is usually not the most important factor
in the power of an organization. Other attributes such as management
skills and effective execution are usually more important.
Furthermore, a weak organization can be provided with a tremendous
amount of knowledge, with still more available for the asking.
Organizations can literally choke on their knowledge.

The difference that organizational analysis makes is brought home by
applying it to McDermott's second step of the Logicist Argument which
is ``to assume that this knowledge must be represented somehow in the
program.'' Recasting the second step as ``to assume that this
knowledge must be represented somehow in the {\em organization}''
reveals that step two of the Logicist Argument is manifestly
problematical because organizations have dynamically emergent
capabilities that do not seem to be explicitly represented anywhere.
\section{Inconsistency and Conflict}
McDermott's critique of the six ``defenses'' of the Logicist Position
is good as far as it goes. Unfortunately, it does not go far enough,
because the six ``defenses'' address only the incompleteness of logical
deduction. They do not address the root problems: {\em inconsistency
and conflict}.

Contradictory beliefs and conflicting information and preferences are
engendered by the enormous interconnectivity and interdependence of
organizational knowledge that comes from multiple sources and
viewpoints. This interconnectivity makes it impossible to separate
knowledge of the organization's affairs into independent modules. The
knowledge of any physical object has extensive {\it spatiotemporal,
causal, terminological, evidential}, and {\it communicative}
connections with other aspects of the organization's affairs [Hewitt
1985, 1986]. The interconnectivity generates an enormous network of
knowledge which is inherently inconsistent because of multiple sources
making contributions at different times and places.
\section {Microtheories as Tools in Organizations}
A microtheory is a relatively small, idealized mathematical model
of some physical system. A microtheory should
be internally consistent and clearly demarcated. Any modification of
a microtheory is a new microtheory. General relativity, Peano
arithmetic, a spreadsheet model of a company's projected sales, and a
Spice simulation of an integrated circuit are examples of
microtheories. Microtheories are simple because they have simple
axiomatizations. The model axiomatized by a microtheory may be
enormously complicated and even have recursively undecidable
properties. Computer systems will require hundreds of thousands of
microtheories in order to effectively participate in organizational
work.
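
The notion can be made concrete with a minimal sketch in Python,
assuming a propositional Horn-clause representation; the class and all
names below are illustrative assumptions, not part of any published
formalism. Note that any ``modification'' yields a new microtheory:

\begin{verbatim}
# A microtheory as a small, clearly demarcated, immutable set of
# Horn clauses.  Illustrative sketch only.

class Microtheory:
    def __init__(self, name, facts, rules):
        self.name = name
        self.facts = frozenset(facts)      # ground atoms
        self.rules = tuple(rules)          # (premises, conclusion) pairs

    def extended_with(self, new_facts):
        # Any modification of a microtheory is a *new* microtheory.
        return Microtheory(self.name + "-prime",
                           self.facts | set(new_facts), self.rules)

    def consequences(self):
        # Exhaustive forward chaining *within* the microtheory.
        derived = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if set(premises) <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived
\end{verbatim}
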
In general, organizations deal with {\em conflicting} and {\em
inconsistent} microtheories that cannot always be measured against one
another in a pointwise fashion. Debate and negotiation are used to
compare rival microtheories without assuming that there is a fixed
common standard of reference. There is no global axiomatic theory of
the world in which we live that gradually becomes more complete as
more microtheories are debugged and introduced. Instead each
problematical concrete situation is dealt with by using negotiation
and debate among the available overlapping, usually conflicting,
microtheories that are adapted to the situation at hand. For many
purposes in organizations, it is preferable to work with microtheories
which are small and oversimplified, rather than large and full of
caveats and conditions [Wimsatt 1985].
\section{Strengths of Logical Deduction}
Logical deduction is a powerful tool for working {\em within} a
microtheory. The strengths of logical deduction include:
\begin{itemize}
\item {\bf Well Understood:} Logical deduction is a very well
understood and characterized process. Rigorous model theories exist
for many logics including the predicate calculus, intuitionistic
logics, and modal logics.
\item {\bf Validity Locally Decidable:} The validity of a deductive
proof is supposed to be timeless and acontextual: if a deductive
proof is valid at all, it is valid for all times and places. This
acontextual character is a tremendous advantage because it separates
the situation of proof creation from the subsequent situations of
proof checking, so that proof generation and proof checking can take
place in completely separate situations. A proof is supposed to be
checkable solely from its inscription; its correctness should be
mechanically decidable from the text of the proof alone, so proofs
can be checked by multiple actors at different times and places,
adding to the confidence in the deductions. In order to be
algorithmic, the proof-checking process cannot require making any
observations or consulting any external sources of information.
Consequently all of the premises of each proof step as to place,
time, objects, etc.\ must be explicit: in effect a {\em situational
closure} must be taken for each deductive step and for the whole
deductive proof tree. Proof checking proceeds in a closed world in
which the axioms and rules of deductive inference have been laid out
explicitly beforehand. (A minimal checker is sketched after this
list.) Nonmonotonic Logics provide schemas for closure operators on
sets of axioms which result in stronger, more complete axiom sets.
{\em For the purposes of this analysis, the deductive proofs of
Nonmonotonic Logics are not fundamentally different from the proofs
of any other deductive system.}
\end{itemize}
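
Local decidability can be illustrated with a minimal proof checker, a
sketch under assumed conventions: each line of a proof inscription is
a formula together with the rule used and the indices of the earlier
lines it cites. The representation and the function below are
illustrative, not any standard system.

\begin{verbatim}
# Validity is decided solely from the proof inscription: the checker
# consults no clock, no sensors, and no external information.
# Rules: "axiom" and modus ponens ("mp").  Illustrative sketch only.

def check_proof(proof, axioms):
    for i, (formula, rule, cites) in enumerate(proof):
        if any(j >= i for j in cites):   # may cite only earlier lines
            return False
        if rule == "axiom":
            if formula not in axioms:
                return False
        elif rule == "mp":               # from P and ("->", P, Q) infer Q
            p = proof[cites[0]][0]
            if proof[cites[1]][0] != ("->", p, formula):
                return False
        else:
            return False                 # unknown rule
    return True

axioms = {"P", ("->", "P", "Q")}
proof = [("P", "axiom", []),
         (("->", "P", "Q"), "axiom", []),
         ("Q", "mp", [0, 1])]
assert check_proof(proof, axioms)   # same verdict at all times and places
\end{verbatim}
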
The advantages of logical deductive reasoning within a microtheory are
enormously important. However, we need to look at the other side of the
coin to examine what is necessarily left out of the Logicist deductive
framework.
\section{Limitations of Logical Deduction}
The advantages of logical deductive inference come at a tremendous cost:
{\em The validity of logical deductive proofs is independent of the
socio-spatio-temporal context in which they are created.}

Consider the proof of a statement S about the safety of the Diablo
Canyon Nuclear Plant on July 1, 1987.
\subsection{Logical Deductive Proofs are Acontextual}
Logical deduction requires that the validity of the proof of S be
independent of whether the deduction takes place after July 1, 1987,
and thus concerns the past, or takes place before July 1, 1987, and
concerns the future. Logical reasoning can be used before the
situation described by S to {\it predict} what might happen around
the nuclear plant. Or it can be used after the situation described by
S to {\it analyze} what did happen.

In either case the logical deductive proof is valid independently of
whether July 1, 1987 lies in the past or the future, and therefore
this distinction cannot be taken into account. In other words,
contrary to the hopes of the Logicists, the reality of {\em now}
cannot be introduced into logical deductive proofs.
Nevertheless, keeping in mind their acontextual limitations, it is
extremely valuable to use logical deduction as an important tool in
analyzing the internal structure of microtheories of the nuclear
plant's operation, its possible failure mechanisms, and their
consequences.
\subsection{Microtheories Contradict Each Other}
The multitude of microtheories concerning the safety of the Diablo
Canyon Nuclear Plant on July 1, 1987 inherently contradict one
another. Therefore the logical deductive inferences from any
microtheory are not very convincing by themselves because they are
contradicted by the inferences of competing microtheories. For
example, SAFE(DIABLO-CANYON, JULY 1, 1987) is deducible in one
microtheory and NOT(SAFE(DIABLO-CANYON, JULY 1, 1987)) is deducible in
another. Logical deduction does not provide any way to resolve the
contradictions.
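
Using the microtheory sketch from above (with illustrative placeholder
propositions, not an actual safety analysis), the situation is easy to
exhibit: each microtheory is internally consistent, each conclusion is
validly deduced, and deduction alone offers no way to adjudicate
between them.

\begin{verbatim}
vendor_theory = Microtheory(
    "vendor",
    facts={"conforms-to-spec"},
    rules=[(["conforms-to-spec"],
            "SAFE(DIABLO-CANYON, JULY-1-1987)")])

critic_theory = Microtheory(
    "critic",
    facts={"untested-failure-modes"},
    rules=[(["untested-failure-modes"],
            "NOT(SAFE(DIABLO-CANYON, JULY-1-1987))")])

assert "SAFE(DIABLO-CANYON, JULY-1-1987)" \
    in vendor_theory.consequences()
assert "NOT(SAFE(DIABLO-CANYON, JULY-1-1987))" \
    in critic_theory.consequences()
# Nothing in the deductive machinery says which conclusion
# an organization should act on.
\end{verbatim}
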
These inherent contradictions are among the most prominent features
of the intellectual landscape. They are also the most ignored by the
Logicists! I have received no response when I have attempted to get
prominent Logicists to address the issue of inherent contradictions.
Could it be that the Logicist Emperor has no clothes? Do Logicists
really believe that knowledge concerning the safety of the Diablo
Canyon Nuclear Plant on July 1, 1987 {\em or the knowledge concerning
any other situated physical object} can somehow be consistently
axiomatized?
\subsection{Microtheories Are in Conflict with Each Other}
Over the years there have been some interesting attempts within the
Logicist Programme to use logical deductive inference to control
action. John McCarthy introduced a predicate named {\em SHOULD} so
that sentences such as SHOULD(WALK(I,CAR)) could be expressed. His
idea was that deductive inferences concerning such predicates could
control what would be done.

However, microtheories for preferences are inherently in conflict with
each other because of the tradeoffs inherent in real situations.
Consider the conflicts inherent in setting the price for some product
(called X). In general greater profitability is preferable to lower
profitability and greater market share is preferable to lower market
share. Increasing the price tends to increase profitability but
decrease market share, thus creating an inherent conflict. In
practice this means that SHOULD(RAISE-PRICE(X)) will be deducible in
one microtheory and NOT(SHOULD(RAISE-PRICE(X))) in another. Logical
deduction provides no mechanism for resolving these conflicts.
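
The pricing conflict has the same shape; in the illustrative
representation used above:

\begin{verbatim}
profit_theory = Microtheory(
    "profitability",
    facts={"raising-price-increases-profit"},
    rules=[(["raising-price-increases-profit"],
            "SHOULD(RAISE-PRICE(X))")])

share_theory = Microtheory(
    "market-share",
    facts={"raising-price-decreases-share"},
    rules=[(["raising-price-decreases-share"],
            "NOT(SHOULD(RAISE-PRICE(X)))")])

assert "SHOULD(RAISE-PRICE(X))" in profit_theory.consequences()
assert "NOT(SHOULD(RAISE-PRICE(X)))" in share_theory.consequences()
\end{verbatim}
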
\subsection{Greater Knowledge can Mean Greater Deductive Uncertainty}
Other Logicists [Weyhrauch 1980] [Genesereth 1986] have attempted to
use metatheories as a control mechanism. For example, Mike Genesereth
has introduced a function named OUGHT so that if a machine is in a
state described by microtheory T, then OUGHT(T) is the microtheory
which describes the ``next'' state of the machine. The axiomatization
of the OUGHT function takes place in a metatheory which describes the
base-level microtheories of the states of the machine.

However, the attempt to use logical deductive inference in
metatheories as a control mechanism has an underlying defective
assumption:
\begin{quote}
Increased knowledge about the state and inputs of a machine means
greater deductive knowledge of the subsequent state of the machine.
\end{quote}
The above assumption runs contrary to an Uncertainty Principle for
machines, analogous to Heisenberg's in modern physics, which states:
\begin{quote}
For a machine M which is sensitive to order of arrival of inputs, the
greater the knowledge of the closeness of arrival of inputs to M, the
greater the uncertainty of the subsequent state of M.
\end{quote}
The Uncertainty Principle is of increasing practical importance as the
asynchronicity of large-scale concurrent computers increases. As the
asynchronicity of such computers increases, it becomes increasingly
difficult to infer by logical deduction how they will behave, even
granted complete knowledge of their structure, their initial state,
and exact knowledge of all inputs.
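
A minimal sketch of such order-of-arrival indeterminacy, using threads
to stand in for asynchronous message arrivals (the program and all
names are illustrative):

\begin{verbatim}
# A machine whose state transition is sensitive to which input
# arrives first.  Complete knowledge of the program, its initial
# state, and both inputs still does not determine the outcome,
# because the arrival order is decided by the scheduler.

import threading, queue

def arrival_order_machine():
    inbox = queue.Queue()
    threading.Thread(target=inbox.put, args=("raise-price",)).start()
    threading.Thread(target=inbox.put, args=("lower-price",)).start()
    first = inbox.get()      # order-sensitive state transition
    second = inbox.get()
    return (first, second)

# Across runs this may return either ordering.
print(arrival_order_machine())
\end{verbatim}
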
\section{Due Process is an Alternative}
The Logicist Approach is further limited because the tree-structured,
nonsituated, locally decidable character of logical deductive proofs
means that audiences cannot be taken into account. Extradeductive
techniques such as negotiation and debate are needed to deal with the
inconsistencies and conflicts between microtheories.

For some time now, I have been investigating Due Process as an
alternative to logical deductive inference as a foundation for
organizational judgment and decision making. Due process is the
inherently reflective organizational activity of humans and computers
for generating sound, relevant, and reliable information as a basis
for decision and action within the constraints of allowable resources.
It provides an arena in which beliefs and proposals can be gathered,
analyzed and debated. The problem of due process is to assure that
organizations have appropriate mechanisms for gathering, recognizing,
weighing, evaluating, and negotiating conflicting alternatives [Gerson
and Star 1986]. Part of due process is to provide a record of the
decision-making process which can later be referenced.

Due process is inherently reflective in that beliefs, goals, plans,
requests, commitments, etc. exist as objects that can be explicitly
mentioned and manipulated in the ongoing process.
Due process does not make decisions and judgments or take other
actions {\em per se\/}. Instead it is the process that informs
organizational action. Each instance of due process begins with {\em
preconceptions} handed down through traditions and culture; these
constitute the initial process but are open to future testing and
evolution. Decision-making criteria, such as preferences among
predicted outcomes, are included in this knowledge base. For example,
increased profitability is preferable to decreased profitability.
Also, increased market share is preferable to decreased market share.
Conflicts between these preferences can be negotiated. In addition,
preferences can arise as a result of conflict. Negotiating conflict
can bring the negotiating process itself into question as part of the
evaluative criteria of how to proceed, which can itself change the
nature of the relationship among the participants [Gerson 1976].
\section{Conclusion}
Any organizational belief is subject to internal and external
challenges. Organizations must efficiently take action and make
decisions in the face of conflicting information and contradictory
beliefs. How they do so is a fundamental consideration in the
foundations of organizational information systems. Logical deduction
is an extremely powerful and useful tool for exploring the structure,
consequences, and limitations of microtheories and models. However,
within organizations, judgments play a more central role than logical
reasoning, deductions, and inferences, which are useful only {\em
within} microtheories. Furthermore, the process of making
organizational judgments is not reducible to inference within
nonmonotonic, modal, omega-order, or any other deductive inference
system.

The limitations of logical deduction and inference are far-reaching in
fundamental ways that are not encompassed by McDermott's Critique.
\section{Acknowledgments}
I would like to acknowledge the help of Richard Waldinger and Fanya
Montalvo in improving the presentation. I owe a tremendous debt to my
colleagues in the Message Passing Semantics Group (Gul Agha, Peter de
Jong, Carl Manning, and Tom Reinhardt), the Tremont Research Institute
(Elihu Gerson and Susan Leigh Star), and the MIT Artificial
Intelligence Laboratory (David Kirsh, Marvin Minsky, and Seymour
Papert).
This paper describes research done at the Artificial Intelligence
Laboratory of the Massachusetts Institute of Technology. Major
support for the research reported in this paper was provided by the
System Development Foundation. Major support for other related work
in the Artificial Intelligence Laboratory is provided, in part, by the
Advanced Research Projects Agency of the Department of Defense under
Office of Naval Research contract N00014-80-C-0505. I would like to
thank Carl York, Charles Smith, and Patrick Winston for their support
and encouragement.